Speech-driven 3D facial animation has been widely explored, with applications in gaming, character animation, virtual reality, and telepresence systems. State-of-the-art methods deform the face topology of the target actor to sync with the input audio without considering the identity-specific speaking style and facial idiosyncrasies of the target actor, thus resulting in unrealistic and inaccurate lip movements. To address this, we present Imitator, a speech-driven facial expression synthesis method that learns identity-specific details from a short input video and produces novel facial expressions matching the identity-specific speaking style and facial idiosyncrasies of the target actor. Specifically, we train a style-agnostic transformer on a large facial expression dataset, which we use as a prior for audio-driven facial expressions. Based on this prior, we optimize for the identity-specific speaking style from a short reference video. To train the prior, we introduce a novel loss function based on detected bilabial consonants to ensure plausible lip closures and, consequently, improve the realism of the generated expressions. Through detailed experiments and a user study, we show that our approach produces temporally coherent facial expressions from input audio while preserving the speaking style of the target actors.
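As a rough illustration of how such a lip-closure term could look, the hypothetical sketch below penalizes the distance between upper- and lower-lip vertices on frames flagged as bilabial consonants (/m/, /b/, /p/). The vertex indices, frame mask, and mesh size are illustrative assumptions, not the authors' implementation.

```python
import torch

def lip_closure_loss(pred_verts, upper_lip_idx, lower_lip_idx, bilabial_mask):
    """Hypothetical lip-closure term.

    pred_verts:    (T, V, 3) predicted face vertices per frame
    upper_lip_idx: indices of upper-lip vertices (assumed known for the template mesh)
    lower_lip_idx: indices of the corresponding lower-lip vertices
    bilabial_mask: (T,) 1.0 on frames where a bilabial consonant (/m/, /b/, /p/) is detected
    """
    upper = pred_verts[:, upper_lip_idx, :]          # (T, K, 3)
    lower = pred_verts[:, lower_lip_idx, :]          # (T, K, 3)
    gap = (upper - lower).norm(dim=-1).mean(dim=-1)  # (T,) mean lip gap per frame
    # Only penalize an open mouth on frames that should show a lip closure.
    return (bilabial_mask * gap).sum() / bilabial_mask.sum().clamp(min=1.0)

# Toy usage with random data.
T, V = 32, 5023                       # mesh size chosen arbitrarily for the example
verts = torch.randn(T, V, 3, requires_grad=True)
mask = torch.zeros(T); mask[[4, 5, 20]] = 1.0
loss = lip_closure_loss(verts, torch.tensor([10, 11]), torch.tensor([40, 41]), mask)
loss.backward()
```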
Egocentric 3D human pose estimation with a single head-mounted fisheye camera has recently attracted attention due to its numerous applications in virtual and augmented reality. Existing methods still struggle in challenging poses where the human body is highly occluded or closely interacting with the scene. To address this issue, we propose a scene-aware egocentric pose estimation method that guides the prediction of the egocentric pose with scene constraints. To this end, we propose an egocentric depth estimation network to predict the scene depth map from a wide-view egocentric fisheye camera while mitigating the occlusion caused by the human body with a depth-inpainting network. Next, we propose a scene-aware pose estimation network that projects the 2D image features and the estimated depth map of the scene into a voxel space and regresses the 3D pose with a V2V network. The voxel-based feature representation provides a direct geometric connection between 2D image features and scene geometry, and further enables the V2V network to constrain the predicted pose based on the estimated scene geometry. To enable the training of the aforementioned networks, we also generated a synthetic dataset, called EgoGTA, and an in-the-wild dataset based on EgoPW, called EgoPW-Scene. Experimental results on our new evaluation sequences show that the predicted 3D egocentric poses are accurate and physically plausible in terms of human-scene interaction, demonstrating that our method outperforms state-of-the-art methods both quantitatively and qualitatively.
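A minimal sketch of the core lifting step, under simplifying assumptions (a pinhole-style projection for readability, whereas the actual system uses a fisheye camera model): each voxel center is projected into the image, and 2D features plus the estimated scene depth are sampled into a volume that a V2V-style 3D CNN could then consume. All shapes and names are illustrative.

```python
import torch
import torch.nn.functional as F

def lift_features_to_voxels(feat2d, depth, voxel_centers, K):
    """Sample 2D image features and estimated scene depth into a voxel grid.

    feat2d:        (C, H, W) image feature map
    depth:         (1, H, W) estimated scene depth map
    voxel_centers: (N, 3) voxel centers in the camera frame (N = X*Y*Z)
    K:             (3, 3) intrinsics (pinhole stand-in for the fisheye model)
    """
    C, H, W = feat2d.shape
    # Project voxel centers to pixel coordinates.
    uvw = (K @ voxel_centers.T).T                    # (N, 3)
    uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)    # (N, 2) pixel coordinates
    # Normalize to [-1, 1] for grid_sample.
    grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,
                        uv[:, 1] / (H - 1) * 2 - 1], dim=-1).view(1, 1, -1, 2)
    sampled_feat = F.grid_sample(feat2d[None], grid, align_corners=True)   # (1, C, 1, N)
    sampled_depth = F.grid_sample(depth[None], grid, align_corners=True)   # (1, 1, 1, N)
    # Encode whether each voxel lies in front of the estimated scene surface.
    occupancy = (voxel_centers[:, 2] < sampled_depth.view(-1)).float()
    volume = torch.cat([sampled_feat.view(C, -1), occupancy[None]], dim=0) # (C+1, N)
    return volume                                    # reshape to (C+1, X, Y, Z) downstream

# Toy usage: a 32^3 voxel grid placed in front of the camera.
xs = torch.linspace(-1, 1, 32)
grid_xyz = torch.stack(torch.meshgrid(xs, xs, xs + 2.0, indexing="ij"), dim=-1).reshape(-1, 3)
K = torch.tensor([[128.0, 0, 64], [0, 128.0, 64], [0, 0, 1]])
vol = lift_features_to_voxels(torch.randn(16, 128, 128), torch.rand(1, 128, 128) * 4, grid_xyz, K)
```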
Can we make virtual characters in a scene interact with their surrounding objects through simple instructions? Is it possible to synthesize such motion plausibly with a diverse set of objects and instructions? Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual human characters performing specified actions with 3D objects placed within their reach. Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters and outputs diverse sequences of full-body motions. This is in contrast to existing work, where full-body action synthesis methods generally do not consider object interactions, and human-object interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional variational autoencoders (CVAE) to learn the motion of the body parts in an autoregressive manner. We also optimize for the positions of the objects with six degrees of freedom (6DoF) such that they plausibly fit within the hands of the synthesized characters. We compare our proposed method with the existing methods of motion synthesis and establish a new and stronger state-of-the-art for the task of intent-driven motion synthesis. Through a user study, we further show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios compared to the current state-of-the-art methods, and are perceived to be as good as the ground truth on several occasions.
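A simplified, hypothetical sketch of one of the two decoupled CVAEs: it encodes the next pose of a body-part group conditioned on the previous pose and an intent/text embedding, and generates autoregressively at test time by sampling the latent. Dimensions, the conditioning vector, and the rollout loop are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class BodyPartCVAE(nn.Module):
    """Toy conditional VAE for one group of body-part joints (e.g., arms/torso)."""
    def __init__(self, pose_dim=66, cond_dim=128, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
                                 nn.Linear(256, pose_dim))
        self.latent_dim = latent_dim

    def forward(self, next_pose, cond):
        mu, logvar = self.enc(torch.cat([next_pose, cond], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterization trick
        recon = self.dec(torch.cat([z, cond], -1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        return recon, kl

    @torch.no_grad()
    def sample(self, cond):
        z = torch.randn(cond.shape[0], self.latent_dim)
        return self.dec(torch.cat([z, cond], -1))

# Autoregressive rollout: condition on the previous pose and an intent embedding.
model = BodyPartCVAE()
intent = torch.randn(1, 62)               # stand-in for a text/intent feature
pose = torch.zeros(1, 66)
for _ in range(30):                       # generate 30 frames
    cond = torch.cat([pose, intent], -1)  # (1, 128) conditioning vector
    pose = model.sample(cond)
```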
Recent methods for neural surface representation and rendering, for example NeuS, have demonstrated remarkably high-quality reconstruction of static scenes. However, training NeuS takes an extremely long time (8 hours), which makes it almost impossible to apply to dynamic scenes with thousands of frames. We propose a fast neural surface reconstruction approach, called NeuS2, which achieves a two-orders-of-magnitude speedup without compromising reconstruction quality. To accelerate the training process, we integrate multi-resolution hash encodings into a neural surface representation and implement the whole algorithm in CUDA. We also present a lightweight calculation of second-order derivatives tailored to our networks (i.e., ReLU-based MLPs), which achieves a factor-of-two speedup. To further stabilize training, a progressive learning strategy is proposed to optimize multi-resolution hash encodings from coarse to fine. In addition, we extend our method to reconstructing dynamic scenes with an incremental training strategy. Our experiments on various datasets demonstrate that NeuS2 significantly outperforms the state of the art in both surface reconstruction accuracy and training speed. The video is available at https://vcai.mpi-inf.mpg.de/projects/NeuS2/ .
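The sketch below illustrates, under simplifying assumptions, two of the ingredients named above: a multi-resolution hash encoding lookup (plain PyTorch, not the CUDA implementation, and 2D for brevity) and a coarse-to-fine schedule that masks the finer levels early in training. Hash-table sizes, resolutions, and the schedule are illustrative.

```python
import torch
import torch.nn as nn

class HashEncoding2D(nn.Module):
    """Toy multi-resolution hash encoding for 2D points (the real method is 3D + CUDA)."""
    def __init__(self, levels=8, table_size=2**14, feat_dim=2, base_res=16, growth=1.5):
        super().__init__()
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim)) for _ in range(levels)])
        self.res = [int(base_res * growth**l) for l in range(levels)]
        self.table_size = table_size
        self.primes = torch.tensor([1, 2654435761])           # spatial hashing primes

    def forward(self, x, active_levels):
        # x: (N, 2) in [0, 1]; active_levels controls the coarse-to-fine schedule.
        feats = []
        for l, table in enumerate(self.tables):
            idx = (x * self.res[l]).long()                    # nearest-corner lookup (no interpolation here)
            h = (idx * self.primes).sum(-1) % self.table_size
            f = table[h]
            if l >= active_levels:                            # mask finer levels early in training
                f = torch.zeros_like(f)
            feats.append(f)
        return torch.cat(feats, dim=-1)

enc = HashEncoding2D()
x = torch.rand(1024, 2)
for step in range(3):
    active = 2 + step * 3                                     # progressively enable finer levels
    features = enc(x, active_levels=active)                   # (1024, levels * feat_dim)
```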
Conventional methods for human motion synthesis are either deterministic or struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion-diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion-editing applications -- such as in-betweening, seed conditioning, and text-based editing -- thus providing crucial abilities for virtual character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video and visit https://vcai.mpi-inf.mpg.de/projects/MoFusion.
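The scheduled weighting idea can be illustrated with a small hypothetical helper: a kinematic loss (here, bone-length consistency) is computed on the denoised motion estimate and weighted more strongly at low diffusion timesteps, where that estimate is reliable. The schedule shape, skeleton, and loss choice below are assumptions, not the paper's exact formulation.

```python
import torch

def bone_length_loss(motion, parents, rest_lengths):
    """Kinematic plausibility term: keep bone lengths close to the rest skeleton.
    motion: (B, T, J, 3) joint positions; parents: (J,) parent index per joint (-1 for root).
    """
    child = torch.arange(len(parents))[parents >= 0]
    bones = motion[:, :, child] - motion[:, :, parents[child]]
    return ((bones.norm(dim=-1) - rest_lengths[child]) ** 2).mean()

def scheduled_weight(t, T_max, kind="sqrt"):
    """Down-weight the kinematic loss at high diffusion timesteps (very noisy x0 estimates)."""
    w = 1.0 - t.float() / T_max
    return w.sqrt() if kind == "sqrt" else w

# Inside a (heavily simplified) diffusion training step:
B, T, J, T_max = 4, 60, 24, 1000
x0_pred = torch.randn(B, T, J, 3, requires_grad=True)          # denoiser's estimate of the clean motion
parents = torch.tensor([-1] + list(range(J - 1)))               # toy chain skeleton
rest_len = torch.ones(J)
t = torch.randint(0, T_max, (B,))
diffusion_loss = x0_pred.pow(2).mean()                          # stand-in for the denoising loss
kin_loss = bone_length_loss(x0_pred, parents, rest_len)
total = diffusion_loss + scheduled_weight(t, T_max).mean() * kin_loss
total.backward()
```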
3D reconstruction and novel view synthesis of dynamic scenes from collections of single views have recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in training speed and in the angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360° novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) an efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) a static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time frame, randomly sampled from a full hemisphere. We refer to this form of input as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method is trained significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data are available at our project page https://graphics.tu-bs.de/publications/kappel2022fast.
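A hypothetical sketch of the decoupling idea: instead of one large network over (x, t), a small per-frame temporal code is combined with a shared spatial deformation MLP that bends sample points into the canonical, hash-encoded radiance field. Network sizes and the exact composition are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DecoupledDeformation(nn.Module):
    """Toy deformation module: per-frame latent (temporal) + shared spatial MLP."""
    def __init__(self, num_frames, time_dim=16, hidden=64):
        super().__init__()
        self.time_codes = nn.Embedding(num_frames, time_dim)    # temporal branch: one code per frame
        self.spatial = nn.Sequential(                           # spatial branch: shared across frames
            nn.Linear(3 + time_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def forward(self, x, frame_idx):
        # x: (N, 3) sample points at time frame_idx -> offsets into the canonical space.
        code = self.time_codes(frame_idx).expand(x.shape[0], -1)
        return x + self.spatial(torch.cat([x, code], dim=-1))

# The canonical points would then be fed to a hash-encoded radiance field (static module).
deform = DecoupledDeformation(num_frames=100)
pts = torch.rand(4096, 3)
canonical_pts = deform(pts, torch.tensor([12]))                 # (4096, 3)
```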
We propose a new method for learning a generalizable and animatable neural human representation from sparse multi-view images of multiple persons. The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and to further animate them with user-controlled poses. While existing methods can either generalize to new persons or synthesize animations under user control, none of them can achieve both at the same time. We attribute this capability to the 3D proxy employed for the shared multi-person human model, together with the warping of the spaces of different poses into a shared canonical pose space, in which we learn a neural field and predict person- and pose-dependent deformations, as well as appearance, from features extracted from the input images. To cope with the complexity of large variations in body shapes, poses, and clothing deformations, we design the neural human model with disentangled geometry and appearance. Furthermore, we utilize image features at both spatial points and surface points of the 3D proxy to predict person- and pose-dependent properties. Experiments show that our method significantly outperforms the state of the art on both tasks. The video and code are available at https://talegqz.github.io/neural_novel_actor.
3D human motion capture from monocular RGB images that respects the interactions of a subject with complex and possibly deformable environments is a very challenging, ill-posed, and under-explored problem. Existing methods address it only weakly and do not model surface deformations that can occur when humans interact with scene surfaces. In contrast, this paper proposes MoCapDeform, a new framework for monocular 3D human motion capture that is the first to explicitly model non-rigid deformations of a 3D scene in order to improve 3D human pose estimation and the reconstruction of the deformable environment. MoCapDeform takes as input a monocular RGB video and a 3D scene aligned in camera space. It first localizes the subject in the input monocular video along with dense contact labels, using a new raycasting-based strategy. Next, our human-environment interaction constraints are leveraged to jointly optimize the global 3D human pose and the non-rigid surface deformations. MoCapDeform achieves higher accuracy than competing methods on several datasets, including our newly recorded one with deforming background scenes.
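To make the joint optimization more concrete, here is a hypothetical energy sketch in the spirit described above: a contact term pulls body vertices labelled as in contact onto the (deformable) scene surface, while a regularizer keeps scene displacements small, and both are minimized together. The terms, weights, and the direct optimization over vertices (rather than pose parameters) are simplifications for illustration, not the paper's exact formulation.

```python
import torch

def contact_energy(body_verts, contact_mask, scene_points):
    """Pull contact-labelled body vertices onto the nearest scene point."""
    contacts = body_verts[contact_mask]                           # (C, 3)
    d = torch.cdist(contacts, scene_points)                       # (C, S) pairwise distances
    return d.min(dim=1).values.pow(2).mean()

def deformation_energy(scene_offsets):
    """Regularize non-rigid scene deformation to stay small."""
    return scene_offsets.pow(2).mean()

# Toy joint optimization over body vertices and scene offsets.
body_verts = torch.randn(6890, 3, requires_grad=True)             # stand-in for body-model vertices
scene_rest = torch.rand(5000, 3) * 3.0
scene_offsets = torch.zeros_like(scene_rest, requires_grad=True)
contact_mask = torch.zeros(6890, dtype=torch.bool); contact_mask[:50] = True

opt = torch.optim.Adam([body_verts, scene_offsets], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    scene = scene_rest + scene_offsets                             # deformed scene geometry
    loss = contact_energy(body_verts, contact_mask, scene) + 0.1 * deformation_energy(scene_offsets)
    loss.backward()
    opt.step()
```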
We present UnrealEgo, a new large-scale naturalistic dataset for egocentric 3D human pose estimation. UnrealEgo is based on an advanced concept of eyeglasses equipped with two fisheye cameras that can be used in unconstrained environments. We design their virtual prototypes and attach them to 3D human models for stereo view capture. We then generate a large corpus of human motions. As a consequence, UnrealEgo is the first dataset to provide in-the-wild stereo images with the largest variety of motions among existing egocentric datasets. Furthermore, we propose a new benchmark method with a simple but effective idea of devising a 2D keypoint estimation module for stereo inputs to improve 3D human pose estimation. Extensive experiments show that our method outperforms the previous state-of-the-art methods qualitatively and quantitatively. UnrealEgo and our source code are available on our project web page.
Given a set of images of a scene, re-rendering this scene from novel viewpoints and lighting conditions is an important and challenging problem in computer vision and graphics. On the one hand, most existing works in computer vision typically impose simplifying assumptions on the image formation process (e.g., direct illumination and predefined material models) to make scene parameter estimation tractable. On the other hand, mature computer graphics tools allow modeling complex photo-realistic light transport given all the scene parameters. Combining these approaches, we propose a method for relighting a scene under novel views by learning a neural precomputed radiance transfer function, which implicitly handles global illumination effects using novel environment maps. Our method can be supervised solely on a set of real images of the scene under a single unknown lighting condition. To disambiguate the task during training, we tightly integrate a differentiable path tracer into the training process and propose a combination of a synthetic OLAT loss and a real image loss. Results show that the recovered disentanglement of scene parameters improves over the current state of the art and, as a consequence, our re-rendering results are also more realistic and accurate.
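At a high level, precomputed radiance transfer factorizes outgoing radiance into a transfer term (per surface point and view, absorbing visibility, material, and interreflections) and the incident environment lighting. A minimal hypothetical sketch of such a factorization, assuming a spherical-harmonics lighting basis (the basis, network, and dimensions are illustrative, not the paper's exact design):

```python
import torch
import torch.nn as nn

class NeuralTransfer(nn.Module):
    """Toy neural PRT: predict per-point transfer coefficients in an SH basis."""
    def __init__(self, sh_bands=3, hidden=128):
        super().__init__()
        n_coeffs = sh_bands ** 2                                   # 9 SH coefficients for 3 bands
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),                   # input: surface point + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * n_coeffs))                       # RGB transfer coefficients
        self.n_coeffs = n_coeffs

    def forward(self, points, view_dirs, env_sh):
        """env_sh: (n_coeffs, 3) SH coefficients of the environment map (novel at test time)."""
        t = self.mlp(torch.cat([points, view_dirs], -1))
        t = t.view(-1, self.n_coeffs, 3)                           # (N, n_coeffs, 3) transfer term
        # Relit radiance: inner product of transfer and lighting coefficients per channel.
        return (t * env_sh[None]).sum(dim=1)                       # (N, 3) outgoing RGB

model = NeuralTransfer()
pts, dirs = torch.rand(1024, 3), torch.randn(1024, 3)
env_sh = torch.randn(9, 3)                                         # e.g., an SH-projected novel environment map
rgb = model(pts, dirs, env_sh)
```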